Artificial intelligence is quickly transforming the fields of therapy and social work — from chatbots delivering mental health support to predictive algorithms identifying at-risk clients, and even robots designed to offer companionship. But with these innovations come complex ethical questions. In our course Artificial Intelligence in the Behavioral Health Professions: Cutting-edge Ethical and Risk Management Challenges, renowned ethics expert Dr. Frederick Reamer dives deep into these issues. Below, we’re sharing some of the thoughtful questions raised during the session, along with Dr. Reamer’s insightful responses.
Q: Clients are more than a collection of symptoms, and clients do not always voice what is going on underneath their words. AI cannot determine anything by empathy or intuition. AI-generated data is gleaned from surface data points. A therapeutic relationship requires a connection that is more than observing data. Isn't this a concern, both ethically and politically?
A: I would say absolutely, I agree completely. I think AI has a lot to offer us, but it does have real limitations, such as those raised in the question.
Q: If we use an AI note-generating tool, we're providing data that can be used to enhance large language models. Might that lead to our professional obsolescence?
A: I think it is highly, highly unlikely that AI is going to put us out of business. I think AI can enhance what we do, and there will always be a demand for human beings who are therapists and counselors. Ideally, AI would supplement our work. But I don't think it's going to replace us. I really don't.
Q: If a client is using AI to self-monitor, does the therapist have access to the results?
A: In my experience, the answer is yes. With the tools that I've seen — and I don't pretend to know every single tool that's out there — the client accesses the AI tool on a smartphone, say, or through a wearable sensor, and the agreement is that the therapist can review that data. It's meaningful clinical data for the therapist as well as for the client.
Q: What are your thoughts on the environmental impact of powering AI?
A: There is a lot of concern about the amount of energy that's required, the carbon footprint, and the impact on our environment. As someone who cares about the environment, I suspect many of you share that concern. If you read the literature on the use of AI, you will see a lot of commentary on its environmental impact. These systems are energy hogs, and that matters to me, and probably to a lot of you as well. There are people out there focusing specifically on the environmental impact of AI.
Q: Are we obligated to use the results of AI?
A: Well, I can't answer for every setting, but I would say that in the behavioral health professions generally, there is no obligation to use AI or to use its results. It's an option.
As AI continues to progress, so too do the possibilities for its use in therapy and social work. As Dr. Reamer reminds us, these tools should enhance our work, not define it. Staying thoughtful and ethical in how we use AI will ensure we continue to put clients first. For more insights, explore our course Artificial Intelligence in the Behavioral Health Professions: Cutting-edge Ethical and Risk Management Challenges and check out our other Ethics courses! You can find these listed in the "Related Links" section of this article.